I can’t tell whether you’re saying AI risk is greater or smaller than EY thinks (I’d guess smaller, since you say the evidence he intended to raise your sense of its severity and urgency had the opposite effect on you).
I take you to be saying that the FAI approach EY advocates is a mistake. I agree that people should not adopt his views as the starting point for their own exploration, but should instead weigh whatever evidence is actually presented. I’m fairly confident there are other lines of development that will reach some AI result (perhaps FAI, perhaps “tool” AI) first, whether you want them to or not. So people who are wise stewards of risk should also explore freely.